

This is Why New AI Tools Are Racist And Sexist (And What To Do About It)

#artificialintelligence

This year was a year of AI; I believe a lot of industries won't be the same, thanks to all the AI-generated content now floating around.


Study: We're Teaching Artificial Intelligence to Be Just as Racist and Sexist as Humans

#artificialintelligence

We live in a world that's increasingly being shaped by complex algorithms and interactive artificial intelligence assistants that help us plot out our days and get from point A to point B. According to a new Princeton study, though, the engineers responsible for teaching these AI programs things about humans are also teaching them how to be racist, sexist assholes. The study, published in today's edition of Science magazine by Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan, focuses on machine learning, the process by which AI programs begin to "think" by making associations based on patterns observed in mass quantities of data. In a completely neutral vacuum, this would mean that AI would learn to provide responses based solely on objective, data-driven facts. But because the data sets fed to the AI are selected and influenced by humans, there's a degree to which certain biases become a part of the AI's diet. To demonstrate this, Caliskan and her team created a modified version of an Implicit Association Test, an exercise that asks participants to quickly pair concrete categories, like people of color and women, with abstract attributes, like goodness and evil.
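The association measure the excerpt describes can be sketched in a few lines: score a target word by how much closer it sits to one attribute set (e.g. "pleasant" words) than to another in an embedding space. This is a minimal, illustrative sketch with hand-made toy vectors, not the study's actual method, which applies a statistical test over real pretrained word embeddings; all vectors and word choices below are invented for illustration.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    """Mean similarity of word vector w to attribute set A, minus mean similarity to set B.
    Positive => w leans toward A; negative => w leans toward B."""
    return float(np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B]))

# Toy 2-D "embeddings" (illustrative only; real studies use pretrained vectors)
pleasant   = [np.array([1.0, 0.1]), np.array([0.9, 0.2])]
unpleasant = [np.array([0.1, 1.0]), np.array([0.2, 0.9])]

flower = np.array([0.95, 0.15])  # deliberately placed near the "pleasant" cluster
insect = np.array([0.15, 0.95])  # deliberately placed near the "unpleasant" cluster

print(association(flower, pleasant, unpleasant))  # positive: leans "pleasant"
print(association(insect, pleasant, unpleasant))  # negative: leans "unpleasant"
```

Because the geometry of the embedding space is learned from human-produced text, any bias in which words co-occur shows up directly in these association scores, which is the point the study makes.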


It's Too Late--We've Already Taught AI to Be Racist and Sexist

#artificialintelligence

They say that kids aren't born sexist or racist; hate is taught. Artificial intelligence is the same way, and humans are fabulous teachers. ProPublica reported, for example, that an algorithm used to predict the likelihood of convicts committing future crime tends to tag black folks as higher risk than whites. Despite the oft-repeated claim that such data-driven approaches are more objective than past methods of determining the risk of recidivism or anything else, it's clear that our very human biases have rubbed off on our machines. Consider the case of Microsoft's simple Tay bot, which sucked up all the slurs and racist opinions that Twitter users threw at it and ended up spouting Nazi drivel.


Can Computer Programs Be Racist And Sexist?

NPR Technology

Last summer, Jacky Alciné learned just how biased computers can be. Alciné, who is African-American, took a bunch of pictures with friends at a concert. Later he loaded them into Google Photos, which stores and automatically organizes images. Google's software is able to group together pictures of a particular friend, or pictures of dogs, cats, etc. But when it labeled a picture of one of Alciné's friends, who is also African-American, it left him speechless.